Non-homogeneous Markov Decision Processes with a Constraint
Authors
Abstract
Similar resources
Markov Decision Processes with Arbitrary Reward Processes
We consider a learning problem where the decision maker interacts with a standard Markov decision process, with the exception that the reward functions vary arbitrarily over time. We show that, against every possible realization of the reward process, the agent can perform as well—in hindsight—as every stationary policy. This generalizes the classical no-regret result for repeated games. Specif...
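One common way to make the hindsight comparison above precise (a generic formalization from the online-MDP literature, not necessarily the exact statement of that paper; the symbols $r_t$, $s_t$, $a_t$, and $\Pi_{\mathrm{stat}}$ are introduced here only for illustration) is

\[
\mathrm{Regret}_T \;=\; \max_{\pi \in \Pi_{\mathrm{stat}}} \mathbb{E}\!\left[\sum_{t=1}^{T} r_t\bigl(s_t^{\pi}, \pi(s_t^{\pi})\bigr)\right] \;-\; \mathbb{E}\!\left[\sum_{t=1}^{T} r_t(s_t, a_t)\right],
\]

where $r_t$ is the reward function realized at time $t$, $(s_t, a_t)$ is the agent's trajectory, $(s_t^{\pi})$ is the trajectory the stationary policy $\pi$ would have followed, and "performing as well as every stationary policy" corresponds to $\mathrm{Regret}_T$ growing sublinearly in $T$.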
Numerical Solution of Non-Homogeneous Markov Processes through Uniformization
Numerical algorithms based on uniformization have been proven to be numerically stable and computationally attractive for computing transient state distributions in homogeneous continuous-time Markov chains. Recently, Van Dijk formulated uniformization for non-homogeneous Markov processes, and it is of interest to investigate numerical algorithms based on uniformization for non-homogeneou...
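As background for the abstract above, a minimal sketch of standard uniformization for a homogeneous continuous-time Markov chain; the function name, the example generator matrix Q, and the truncation tolerance are illustrative assumptions, and the non-homogeneous extension discussed in the paper is not shown.

import numpy as np

def uniformization(Q, p0, t, tol=1e-12, max_terms=10_000):
    """Transient distribution p(t) = p0 * expm(Q t) of a homogeneous CTMC via
    uniformization: P = I + Q/Lambda, p(t) = sum_k Pois(k; Lambda t) * p0 P^k."""
    Lambda = max(-np.diag(Q))             # uniformization rate >= largest exit rate
    P = np.eye(Q.shape[0]) + Q / Lambda   # embedded DTMC of the uniformized chain
    weight = np.exp(-Lambda * t)          # Poisson weight for k = 0
    v = p0.copy()                         # p0 P^k, updated iteratively
    result = weight * v
    accumulated = weight
    k = 1
    # Truncate the Poisson series once the remaining probability mass is below tol
    while 1.0 - accumulated > tol and k <= max_terms:
        v = v @ P
        weight *= Lambda * t / k
        result += weight * v
        accumulated += weight
        k += 1
    return result

# Example: two-state chain with rates 1.0 (0 -> 1) and 0.5 (1 -> 0)
Q = np.array([[-1.0, 1.0],
              [0.5, -0.5]])
p0 = np.array([1.0, 0.0])
print(uniformization(Q, p0, t=2.0))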
Bounded Parameter Markov Decision Processes
In this paper, we introduce the notion of a bounded parameter Markov decision process as a generalization of the traditional exact MDP. A bounded parameter MDP is a set of exact MDPs specified by giving upper and lower bounds on transition probabilities and rewards (all the MDPs in the set share the same state and action space). Bounded parameter MDPs can be used to represent variation or uncert...
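A rough sketch of how such interval bounds can be used to compute a pessimistic (lower) bound on the optimal value function, assuming the intervals are consistent (the lower bounds sum to at most 1 and the upper bounds to at least 1); the function names, array shapes, and the greedy inner minimization below are assumptions of this sketch, not the algorithm of that paper.

import numpy as np

def worst_case_expectation(p_low, p_high, values):
    """Minimize sum_i p_i * values_i subject to p_low <= p <= p_high, sum_i p_i = 1.
    Greedy: start at the lower bounds, then push the remaining mass onto the
    lowest-valued successors first."""
    p = p_low.copy()
    slack = 1.0 - p.sum()
    for i in np.argsort(values):
        add = min(slack, p_high[i] - p_low[i])
        p[i] += add
        slack -= add
        if slack <= 0:
            break
    return float(p @ values)

def interval_value_iteration(r_low, P_low, P_high, gamma=0.9, iters=500):
    """Lower bound on the optimal value function of a bounded parameter MDP.
    r_low: (n_states, n_actions); P_low, P_high: (n_states, n_actions, n_states)."""
    n_states, n_actions = r_low.shape
    V = np.zeros(n_states)
    for _ in range(iters):
        V = np.array([
            max(r_low[s, a] + gamma * worst_case_expectation(P_low[s, a], P_high[s, a], V)
                for a in range(n_actions))
            for s in range(n_states)
        ])
    return V

The symmetric optimistic (upper) bound is obtained the same way, except the remaining probability mass is pushed onto the highest-valued successors instead.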
Learning Qualitative Markov Decision Processes
To navigate in natural environments, a robot must decide the best action to take according to its current situation and goal, a problem that can be represented as a Markov Decision Process (MDP). In general, it is assumed that a reasonable state representation and transition model can be provided by the user to the system. When dealing with complex domains, however, it is not always easy or pos...
Accelerated decomposition techniques for large discounted Markov decision processes
Many hierarchical techniques for solving large Markov decision processes (MDPs) are based on partitioning the state space into strongly connected components (SCCs), which can be grouped into levels. In each level, smaller problems called restricted MDPs are solved, and these partial solutions are then combined to obtain the global solution. In this paper, we first propose a novel algorith...
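A generic illustration of the SCC-based decomposition idea (not the accelerated algorithm proposed in that paper): states are grouped into strongly connected components of the reachability graph, and each restricted problem is solved with the values of downstream components held fixed. The data layout, the scipy/graphlib calls, and the fixed sweep count are assumptions of this sketch.

import numpy as np
from graphlib import TopologicalSorter
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def solve_by_scc_decomposition(P, R, gamma=0.95, sweeps=200):
    """Approximate the optimal value function of a discounted MDP by solving one
    SCC of the reachability graph at a time, with successor components solved first.
    P: transitions, shape (n_actions, n_states, n_states); R: rewards, (n_states, n_actions)."""
    n_actions, n_states, _ = P.shape

    # Edge s -> s' if some action moves s to s' with positive probability
    reach = (P.sum(axis=0) > 0)
    n_comp, label = connected_components(csr_matrix(reach.astype(int)),
                                         directed=True, connection='strong')

    # Order the condensation so every component reachable from c is solved before c
    ts = TopologicalSorter()
    for c in range(n_comp):
        ts.add(c)
    for s in range(n_states):
        for t in np.flatnonzero(reach[s]):
            if label[s] != label[t]:
                ts.add(label[s], label[t])   # label[t] must come before label[s]

    V = np.zeros(n_states)
    for c in ts.static_order():
        states = np.flatnonzero(label == c)
        # Restricted MDP: Gauss-Seidel value-iteration sweeps over this component only;
        # values of states outside the component are already final.
        for _ in range(sweeps):
            for s in states:
                V[s] = max(R[s, a] + gamma * P[a, s] @ V for a in range(n_actions))
    return V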
Journal
Journal title: Journal of Mathematical Analysis and Applications
Year: 1997
ISSN: 0022-247X
DOI: 10.1006/jmaa.1997.5610